
Collaborating Authors

Monash University


Uncovering Students' Inquiry Patterns in GenAI-Supported Clinical Practice: An Integration of Epistemic Network Analysis and Sequential Pattern Mining

Wei, Jiameng, Dang, Dinh, Yang, Kaixun, Stokes, Emily, Mazeh, Amna, Lim, Angelina, Dai, David Wei, Moore, Joel, Fan, Yizhou, Gasevic, Danijela, Gasevic, Dragan, Chen, Guanliang

arXiv.org Artificial Intelligence

Assessment of medication history-taking has traditionally relied on human observation, limiting scalability and detailed performance data. While Generative AI (GenAI) platforms enable extensive data collection and learning analytics provide powerful methods for analyzing educational traces, these approaches remain largely underexplored in pharmacy clinical training. This study addresses this gap by applying learning analytics to understand how students develop clinical communication competencies with GenAI-powered virtual patients -- a crucial endeavor given the diversity of student cohorts, varying language backgrounds, and the limited opportunities for individualized feedback in traditional training settings. We analyzed 323 students' interaction logs across Australian and Malaysian institutions, comprising 50,871 coded utterances from 1,487 student-GenAI dialogues. Combining Epistemic Network Analysis to model inquiry co-occurrences with Sequential Pattern Mining to capture temporal sequences, we found that high performers demonstrated strategic deployment of information recognition behaviors. Specifically, high performers centered inquiry on recognizing clinically relevant information, integrating rapport-building and structural organization, while low performers remained in routine question-verification loops. Demographic factors including first-language background, prior pharmacy work experience, and institutional context, also shaped distinct inquiry patterns. These findings reveal inquiry patterns that may indicate clinical reasoning development in GenAI-assisted contexts, providing methodological insights for health professions education assessment and informing adaptive GenAI system design that supports diverse learning pathways.
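As a rough sketch of how these two methods operate on coded utterances, the toy below counts ENA-style code co-occurrences within a moving stanza window and mines contiguous code bigrams as a minimal sequential-pattern step. The code labels are hypothetical stand-ins, not the study's actual coding scheme.

```python
from collections import Counter
from itertools import combinations

# Hypothetical inquiry codes; the paper's coding scheme is not reproduced here.
dialogues = [
    ["rapport", "recognize_info", "verify", "recognize_info", "structure"],
    ["verify", "verify", "recognize_info", "verify"],
]

def cooccurrences(dialogue, window=3):
    """Count unordered code co-occurrences within a moving stanza window (ENA-style)."""
    counts = Counter()
    for i in range(len(dialogue)):
        stanza = set(dialogue[max(0, i - window + 1):i + 1])
        for pair in combinations(sorted(stanza), 2):
            counts[pair] += 1
    return counts

def frequent_bigrams(dialogues, min_support=2):
    """Mine contiguous code bigrams occurring in at least min_support dialogues (a minimal SPM step)."""
    support = Counter()
    for d in dialogues:
        for bigram in {tuple(d[i:i + 2]) for i in range(len(d) - 1)}:
            support[bigram] += 1
    return {b: s for b, s in support.items() if s >= min_support}

print(cooccurrences(dialogues[0]))
print(frequent_bigrams(dialogues))
```

Real ENA additionally projects the co-occurrence vectors into a low-dimensional space for group comparison; the counting step above is only the input to that projection.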


Robust Anomaly Detection in O-RAN: Leveraging LLMs against Data Manipulation Attacks

Dayaratne, Thusitha, Pham, Ngoc Duy, Vo, Viet, Lai, Shangqi, Abuadbba, Sharif, Suzuki, Hajime, Yuan, Xingliang, Rudolph, Carsten

arXiv.org Artificial Intelligence

The introduction of 5G and the Open Radio Access Network (O-RAN) architecture has enabled more flexible and intelligent network deployments. However, the increased complexity and openness of these architectures also introduce novel security challenges, such as data manipulation attacks on the semi-standardised Shared Data Layer (SDL) within the O-RAN platform through malicious xApps. In particular, malicious xApps can exploit this vulnerability by introducing subtle Unicode-wise alterations (hypoglyphs) into the data that are being used by traditional machine learning (ML)-based anomaly detection methods. These Unicode-wise manipulations can potentially bypass detection and cause failures in anomaly detection systems based on traditional ML, such as AutoEncoders, which are unable to process hypoglyphed data without crashing. We investigate the use of Large Language Models (LLMs) for anomaly detection within the O-RAN architecture to address this challenge. We demonstrate that LLM-based xApps maintain robust operational performance and are capable of processing manipulated messages without crashing. While initial detection accuracy requires further improvements, our results highlight the robustness of LLMs to adversarial attacks such as hypoglyphs in input data. There is potential to use their adaptability through prompt engineering to further improve the accuracy, although this requires further research. Additionally, we show that LLMs achieve low detection latency (under 0.07 seconds), making them suitable for Near-Real-Time (Near-RT) RIC deployments.
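To illustrate the attack class: hypoglyphs are visually identical Unicode variants of expected characters. The sketch below shows how a naive string comparison misses a fullwidth-letter spoof and how far plain Unicode normalization gets; the field name is hypothetical, and the paper's LLM-based detection does not rely on normalization.

```python
import unicodedata

def nfkc_fold(text):
    """Fold Unicode compatibility variants (fullwidth and styled letters) back to canonical forms."""
    return unicodedata.normalize("NFKC", text)

spoofed = "ｃｅｌｌ_load"  # fullwidth Latin letters: renders like 'cell_load' but compares unequal
print(spoofed == "cell_load")             # False: a naive string match misses the spoof
print(nfkc_fold(spoofed) == "cell_load")  # True after NFKC folding

# Caveat: cross-script homoglyphs (e.g. Cyrillic 'а', U+0430) survive NFKC and need
# a confusables table (Unicode TR39) or a model that is robust to them, as studied here.
print(nfkc_fold("\u0430") == "a")  # False
```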


Australia has been hesitant – but could robots soon be delivering your pizza?

The Guardian

Robots zipping down footpaths may sound futuristic, but they are increasingly being put to work making deliveries around the world – though a legal minefield and a cautious approach to new tech mean they are largely absent in Australia. Retail and food businesses have been using robots for a variety of reasons, with hazard detection robots popping up in certain Woolworths stores and virtual waiters taking dishes from kitchens in understaffed restaurants to hungry diners in recent years. Overseas, in jurisdictions such as California, robots are far more visible in everyday life. Following on from the first wave of self-driving car trials in cities such as San Francisco, humans now also share footpaths with robots. Likened to lockers on wheels, the robots are operated by companies including Serve Robotics and Coco, which have partnered with Uber Eats and DoorDash and now have armies of robots travelling along footpaths in Los Angeles delivering takeaway meals and groceries.


Leveraging Full Dependency Parsing Graph Information For Biomedical Event Extraction

Noravesh, Farshad, Haffari, Reza, Fang, Ong Huey, Soon, Layki, Rajalana, Sailaja, Pal, Arghya

arXiv.org Artificial Intelligence

Many models have been proposed in the literature on biomedical event extraction (BEE). Some of them use shortest dependency path (SDP) information to represent the argument classification task. This representation is fragile: omitting even one word from the dependency parsing graph may completely change the final prediction. To address this, the full adjacency matrix of the dependency graph is used to embed individual tokens with a graph convolutional network (GCN). An ablation study shows the effect of the dependency graph on overall performance. The results show a significant improvement when dependency graph information is used, and the proposed model slightly outperforms state-of-the-art BEE models across different datasets.
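The core operation the abstract describes, propagating token embeddings over the full dependency adjacency matrix rather than only the SDP, can be sketched as a single GCN layer in the standard Kipf-Welling formulation; the matrix, dimensions, and random features below are illustrative, not the paper's configuration.

```python
import numpy as np

def gcn_layer(A, X, W):
    """One GCN layer: symmetric-normalized adjacency (with self-loops) times features times weights."""
    A_hat = A + np.eye(A.shape[0])             # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))  # degree normalization
    return np.maximum(0.0, d_inv_sqrt @ A_hat @ d_inv_sqrt @ X @ W)  # ReLU

# Toy dependency graph over 4 tokens, using the full adjacency, not just the SDP
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 0],
              [1, 0, 0, 0]], dtype=float)
X = np.random.default_rng(0).normal(size=(4, 8))  # token embeddings
W = np.random.default_rng(1).normal(size=(8, 4))  # layer weights
H = gcn_layer(A, X, W)
print(H.shape)  # (4, 4): one updated embedding per token
```

Because every edge contributes to the propagation, dropping a single word perturbs the representation only locally instead of severing the whole path, which is the robustness argument made above.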


Modifying AI, Enhancing Essays: How Active Engagement with Generative AI Boosts Writing Quality

Yang, Kaixun, Raković, Mladen, Liang, Zhiping, Yan, Lixiang, Zeng, Zijie, Fan, Yizhou, Gašević, Dragan, Chen, Guanliang

arXiv.org Artificial Intelligence

Students are increasingly relying on Generative AI (GAI) to support their writing, a key pedagogical practice in education. In GAI-assisted writing, students can delegate core cognitive tasks (e.g., generating ideas and turning them into sentences) to GAI while still producing high-quality essays. This creates new challenges for teachers in assessing and supporting student learning, as they often lack insight into whether students are engaging in meaningful cognitive processes during writing or how much of an essay's quality can be attributed to those processes. This study aimed to help teachers better assess and support student learning in GAI-assisted writing by examining how different writing behaviors, especially those indicative of meaningful learning versus those that are not, impact essay quality. Using a dataset of 1,445 GAI-assisted writing sessions, we applied a cutting-edge causal inference method, the X-Learner, to quantify the causal impact of three GAI-assisted writing behavioral patterns (i.e., seeking suggestions but not accepting them, seeking suggestions and accepting them as they are, and seeking suggestions and accepting them with modification) on four measures of essay quality (i.e., lexical sophistication, syntactic complexity, text cohesion, and linguistic bias). Our analysis showed that writers who frequently modified GAI-generated text, suggesting active engagement in higher-order cognitive processes, consistently improved the quality of their essays in terms of lexical sophistication, syntactic complexity, and text cohesion. In contrast, those who often accepted GAI-generated text without changes, primarily engaging in lower-order processes, saw a decrease in essay quality. Additionally, while human writers tend to introduce linguistic bias when writing independently, incorporating GAI-generated text, even without modification, can help mitigate this bias.
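To make the X-Learner concrete, here is a minimal sketch with ordinary least-squares base learners and a constant propensity score on synthetic data; the paper's actual features, base learners, and outcome measures differ.

```python
import numpy as np

def linfit(X, y):
    """Least-squares linear base learner; returns a prediction function."""
    Xb = np.c_[X, np.ones(len(X))]
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return lambda Xn: np.c_[Xn, np.ones(len(Xn))] @ w

def x_learner(X, y, t, propensity=0.5):
    """Minimal X-Learner (Kunzel et al.) with linear learners and a constant propensity."""
    X1, y1, X0, y0 = X[t == 1], y[t == 1], X[t == 0], y[t == 0]
    mu1, mu0 = linfit(X1, y1), linfit(X0, y0)   # stage 1: per-arm outcome models
    d1 = y1 - mu0(X1)                            # imputed effects on the treated
    d0 = mu1(X0) - y0                            # imputed effects on the controls
    tau1, tau0 = linfit(X1, d1), linfit(X0, d0)  # stage 2: effect models
    g = propensity                               # stage 3: propensity-weighted blend
    return lambda Xn: g * tau0(Xn) + (1 - g) * tau1(Xn)

# Toy data: the true effect of the "treatment" behavior on a quality score is +2.0
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 2))
t = rng.integers(0, 2, size=600)
y = X @ np.array([1.0, -0.5]) + 2.0 * t + rng.normal(scale=0.1, size=600)
tau = x_learner(X, y, t)
print(float(tau(X).mean()))  # close to 2.0
```

The appeal of the X-Learner in observational log data like this is that it estimates heterogeneous effects even when treated and untreated groups are unbalanced.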


From Complexity to Parsimony: Integrating Latent Class Analysis to Uncover Multimodal Learning Patterns in Collaborative Learning

Yan, Lixiang, Gašević, Dragan, Zhao, Linxuan, Echeverria, Vanessa, Jin, Yueqiao, Martinez-Maldonado, Roberto

arXiv.org Artificial Intelligence

Multimodal Learning Analytics (MMLA) leverages advanced sensing technologies and artificial intelligence to capture complex learning processes, but integrating diverse data sources into cohesive insights remains challenging. This study introduces a novel methodology for integrating latent class analysis (LCA) within MMLA to map monomodal behavioural indicators into parsimonious multimodal ones. Using a high-fidelity healthcare simulation context, we collected positional, audio, and physiological data, deriving 17 monomodal indicators. LCA identified four distinct latent classes: Collaborative Communication, Embodied Collaboration, Distant Interaction, and Solitary Engagement, each capturing unique monomodal patterns. Epistemic network analysis compared these multimodal indicators with the original monomodal indicators and found that the multimodal approach was more parsimonious while offering higher explanatory power regarding students' task and collaboration performances. The findings highlight the potential of LCA in simplifying the analysis of complex multimodal data while capturing nuanced, cross-modality behaviours, offering actionable insights for educators and enhancing the design of collaborative learning interventions. This study proposes a pathway for advancing MMLA, making it more parsimonious and manageable, and aligning with the principles of learner-centred education.
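A minimal version of the LCA step, mapping binary monomodal indicators to latent classes, can be written as an EM loop over class weights and per-class Bernoulli response probabilities; the indicators, class count, and data below are toy assumptions, not the study's 17 indicators or four classes.

```python
import numpy as np

def lca_em(Y, k, iters=200, seed=0):
    """EM for latent class analysis of binary indicators: returns class weights pi,
    per-class response probabilities theta, and posterior class memberships."""
    rng = np.random.default_rng(seed)
    n, m = Y.shape
    pi = np.full(k, 1.0 / k)
    theta = rng.uniform(0.3, 0.7, size=(k, m))
    for _ in range(iters):
        # E-step: posterior responsibility of each class for each observation
        log_lik = (Y[:, None, :] * np.log(theta)
                   + (1 - Y[:, None, :]) * np.log(1 - theta)).sum(-1)
        log_post = np.log(pi) + log_lik
        post = np.exp(log_post - log_post.max(1, keepdims=True))
        post /= post.sum(1, keepdims=True)
        # M-step: update mixing weights and response probabilities
        pi = post.mean(0)
        theta = np.clip((post.T @ Y) / post.sum(0)[:, None], 1e-6, 1 - 1e-6)
    return pi, theta, post

# Toy: two behavioural profiles over 4 binary monomodal indicators
rng = np.random.default_rng(1)
true = np.array([[0.9, 0.9, 0.1, 0.1], [0.1, 0.1, 0.9, 0.9]])
z = rng.integers(0, 2, 500)
Y = (rng.uniform(size=(500, 4)) < true[z]).astype(float)
pi, theta, post = lca_em(Y, k=2)
print(np.round(theta, 1))
```

Each row of `theta` is one latent class's indicator profile, which plays the role of the parsimonious multimodal indicator described above.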


Adaptive Transformer Modelling of Density Function for Nonparametric Survival Analysis

Zhang, Xin, Mehta, Deval, Hu, Yanan, Zhu, Chao, Darby, David, Yu, Zhen, Merlo, Daniel, Gresle, Melissa, Van Der Walt, Anneke, Butzkueven, Helmut, Ge, Zongyuan

arXiv.org Artificial Intelligence

The primary task of survival analysis is to determine the timing of one or multiple events, which can signify the moment of a mechanical system malfunction, the period of transition from corporate deficit to surplus, the instance of patient fatality, and so on, depending on the specific circumstance (Lee and Whitmore, 2006). Among all scenarios, survival analysis for medical data poses the most severe challenges (Collett, 2023). Some medical datasets are longitudinal, as exemplified by electronic health records (EHRs), where multiple observations of each patient's covariates over time are recorded. Survival models must be capable of handling such measurements and learning from their continuous temporal trends. Moreover, observations in longitudinal data are often sparse, necessitating the effective handling of missing values for any reliable survival model, even when the missing rates are exceedingly high (Singer and Willett, 1991). Additionally, censoring represents a fundamental aspect of survival data, referring to cases in which complete information regarding the survival time or event occurrence of a subject is not fully observed within the study period (Leung et al., 1997).
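Censoring is easiest to see with the classic nonparametric Kaplan-Meier estimator, in which censored subjects stay in the risk set until they drop out but never count as events; this is an illustrative baseline, not the transformer model proposed here.

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival curve: at each observed event time t, multiply the running
    survival probability by (1 - d/n), where d = events at t and n = subjects still at
    risk. Censored subjects (event=0) leave the risk set without triggering an event."""
    data = sorted(zip(times, events))
    surv, curve, i = 1.0, [], 0
    while i < len(data):
        t = data[i][0]
        d = sum(e for tt, e in data if tt == t)          # events at this time
        at_risk = sum(1 for tt, _ in data if tt >= t)    # risk set size
        if d:
            surv *= 1 - d / at_risk
            curve.append((t, surv))
        i += sum(1 for tt, _ in data if tt == t)         # skip past ties
    return curve

# 1 = event observed, 0 = right-censored (e.g. the patient left the study)
times  = [2, 3, 3, 5, 8, 8, 9]
events = [1, 1, 0, 1, 1, 0, 1]
print(kaplan_meier(times, events))
```

Discarding the censored subjects instead would bias the curve downward, which is why every survival model, parametric or not, must treat censoring explicitly.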


From Prediction to Action: Critical Role of Performance Estimation for Machine-Learning-Driven Materials Discovery

Boley, Mario, Luong, Felix, Teshuva, Simon, Schmidt, Daniel F, Foppa, Lucas, Scheffler, Matthias

arXiv.org Artificial Intelligence

Materials discovery driven by statistical property models is an iterative decision process, during which an initial data collection is extended with new data proposed by a model-informed acquisition function--with the goal to maximize a certain "reward" over time, such as the maximum property value discovered so far. While the materials science community achieved much progress in developing property models that predict well on average with respect to the training distribution, this form of in-distribution performance measurement is not directly coupled with the discovery reward. This is because an iterative discovery process has a shifting reward distribution that is over-proportionally determined by the model performance for exceptional materials. We demonstrate this problem using the example of bulk modulus maximization among double perovskite oxides. We find that the in-distribution predictive performance suggests random forests as superior to Gaussian process regression, while the results are inverse in terms of the discovery rewards. We argue that the lack of proper performance estimation methods from pre-computed data collections is a fundamental problem for improving data-driven materials discovery, and we propose a novel such estimator that, in contrast to naïve reward estimation, successfully predicts Gaussian processes with the "expected improvement" acquisition function as the best of four options in our demonstration study for double perovskites. Importantly, it does so without requiring the more than one thousand ab initio computations that were needed to confirm this prediction.
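The "expected improvement" acquisition function mentioned above has a closed form under a Gaussian posterior. The sketch below scores hypothetical candidate materials by EI and shows how a high-uncertainty candidate can outrank one with a higher predicted mean; the numbers are illustrative, not from the study.

```python
import math

def expected_improvement(mu, sigma, best, xi=0.0):
    """EI for maximization: E[max(f - best - xi, 0)] under a Gaussian posterior N(mu, sigma^2)."""
    if sigma <= 0:
        return max(mu - best - xi, 0.0)
    z = (mu - best - xi) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)   # standard normal density
    cdf = 0.5 * (1 + math.erf(z / math.sqrt(2)))            # standard normal CDF
    return (mu - best - xi) * cdf + sigma * pdf

# Hypothetical candidates: GP posterior (mean, std) of bulk modulus in GPa
candidates = {"A": (250.0, 5.0), "B": (240.0, 30.0), "C": (255.0, 0.5)}
best_so_far = 252.0
scores = {m: expected_improvement(mu, s, best_so_far) for m, (mu, s) in candidates.items()}
print(max(scores, key=scores.get))  # "B": uncertainty wins over a slightly higher mean
```

This exploration bonus is exactly why an in-distribution accuracy metric can rank models differently from their eventual discovery reward.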


Human-AI Collaboration in Thematic Analysis using ChatGPT: A User Study and Design Recommendations

Yan, Lixiang, Echeverria, Vanessa, Nieto, Gloria Fernandez, Jin, Yueqiao, Swiecki, Zachari, Zhao, Linxuan, Gašević, Dragan, Martinez-Maldonado, Roberto

arXiv.org Artificial Intelligence

Generative artificial intelligence (GenAI) offers promising potential for advancing human-AI collaboration in qualitative research. However, existing work has focused on conventional machine-learning and pattern-based AI systems, and little is known about how researchers interact with GenAI in qualitative research. This work delves into researchers' perceptions of their collaboration with GenAI, specifically ChatGPT. Through a user study involving ten qualitative researchers, we found ChatGPT to be a valuable collaborator for thematic analysis, enhancing coding efficiency, aiding initial data exploration, offering granular quantitative insights, and assisting comprehension for non-native speakers and non-experts. Yet, concerns about its trustworthiness and accuracy, reliability and consistency, limited contextual understanding, and broader acceptance within the research community persist. We contribute five actionable design recommendations to foster effective human-AI collaboration. These include incorporating transparent explanatory mechanisms, enhancing interface and integration capabilities, prioritising contextual understanding and customisation, embedding human-AI feedback loops and iterative functionality, and strengthening trust through validation mechanisms.


A Lab Just 3D-Printed a Neural Network of Living Brain Cells

WIRED

You can 3D-print nearly anything: rockets, mouse ovaries, and for some reason, lamps made of orange peels. Now, scientists at Monash University in Melbourne, Australia, have printed living neural networks composed of rat brain cells that seem to mature and communicate like real brains do. Researchers want to create mini-brains partly because they could someday offer a viable alternative to animal testing in drug trials and studies of basic brain function. At the start of 2023, the US Congress passed an annual spending bill pushing scientists to reduce their use of animals in federally funded research, following the signing of the US Food and Drug Administration's Modernization Act 2.0, which allowed high-tech alternatives in drug safety trials. Rather than testing new drugs on thousands of animals, pharmaceutical companies could apply them to 3D-printed mini-brains--in theory.